The New Fire by Ben Buchanan & Andrew Imbrie
Author: Ben Buchanan & Andrew Imbrie [Buchanan, Ben & Imbrie, Andrew]
Language: eng
Format: epub
Publisher: MIT Press
Published: 2022-02-15T00:00:00+00:00
The Cat-and-Mouse Game Expands
In 2012, several years before the Cyber Grand Challenge and before he devised GANs, Ian Goodfellow received an email from his doctoral advisor at the University of Montreal, Yoshua Bengio. Like Geoffrey Hinton, Bengio had earned a reputation as one of the pioneers of machine learning.54 In the email, Bengio raised a limitation of neural networks that was of growing research interest and concern: they could be hacked.55
Bengio introduced Goodfellow to a paper in progress by Christian Szegedy, an employee at Google. Szegedy was among the first to discover that adversaries could tweak the input to a trained neural network in a way that no human could spot but that would cause the machine learning system to fail. Altering just a few pixels in an image of a school bus, for example, could cause a machine learning system to instead identify it as an ostrich, even though a human who compared the altered and unaltered pictures would see no discernible difference.
Goodfellow assisted Szegedy with some of the research for the paper. He helped coin the term "adversarial examples" to refer to inputs crafted to fool machine learning systems.56 As the two researchers began to explore adversarial examples further, they were struck by what they found. Whereas some kinds of software vulnerabilities, such as the ones later exploited in the Cyber Grand Challenge, were straightforward to fix once they were discovered, adversarial examples seemed to arise from an intrinsic weakness of the neural networks themselves. Changing a neural network to defend against one adversarial example often made the network more vulnerable to another example.
Goodfellow and others knew that the structure of a neural network and the configuration of its parameters, determined by the training process described in chapter 1, shaped how data cascaded through it. In AlexNet or another image classifier, the input data arrived in the first layer of the network and then, depending on the strength of the connections between neurons, moved through the layers before arriving at the output layer. The output layer expressed the neural network's identification of the image; a system designed to tell tanks from jeeps would usually have two neurons in the output layer, one for each possibility. If the network classified a particular picture as a tank, the tank neuron would contain a high value and the jeep neuron a low one. Machine learning scientists and the training process for each neural network aimed to produce systems that reliably generated the right outputs for each input.
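As a minimal illustration (not from the book), a two-class classifier of this kind might look like the following PyTorch sketch. The TinyClassifier name, the layer sizes, and the image dimensions are invented for the example; real systems such as AlexNet are far larger, but the shape of the idea is the same: an image flows through the layers and ends at an output layer with one neuron per class.

import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn simple visual features
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),                     # shrink to an 8x8 feature map
        )
        self.output = nn.Linear(16 * 8 * 8, 2)           # two output neurons: tank, jeep

    def forward(self, image):
        x = self.features(image)
        return self.output(x.flatten(1))                 # higher value = more confident class

model = TinyClassifier()
picture = torch.rand(1, 3, 64, 64)                       # stand-in for a photo of a tank
scores = model(picture)                                  # e.g. tensor([[ 1.3, -0.2]])
label = ["tank", "jeep"][scores.argmax(dim=1).item()]    # the neuron with the higher value wins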
Goodfellow and others realized that hackers could turn this process on its head to craft adversarial examples that fooled neural networks. To dupe a machine learning system into thinking a picture of a tank was a picture of a jeep, a hacker could begin with a picture of a tank that the trained machine learning system correctly recognized. The hacker could then make slight perturbations to the image, changing just a pixel or two, and observe how the neural network responded. If the small change nudged the network's output even slightly toward the jeep classification, the hacker could keep it and continue, accumulating tiny, imperceptible tweaks until the system confidently mistook the tank for a jeep.
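Continuing the toy example above (again an assumption for illustration, not the authors' code or any real attack tool), the trial-and-error loop this paragraph describes might look like the sketch below: nudge one pixel, keep the nudge only if the network's "jeep" score rises, and stop once the classifier flips.

import torch

TANK, JEEP = 0, 1  # indices of the two output neurons in the toy classifier above

def perturb_toward_jeep(model, tank_image, steps=5000, epsilon=0.02):
    """Hill-climbing perturbation: keep only the pixel tweaks that raise the jeep score."""
    image = tank_image.clone()
    with torch.no_grad():
        best = model(image).softmax(dim=1)[0, JEEP].item()
        for _ in range(steps):
            candidate = image.clone()
            # choose one pixel (channel, row, column) and nudge it slightly up or down
            c = torch.randint(0, candidate.shape[1], (1,)).item()
            y = torch.randint(0, candidate.shape[2], (1,)).item()
            x = torch.randint(0, candidate.shape[3], (1,)).item()
            candidate[0, c, y, x] += epsilon if torch.rand(1).item() < 0.5 else -epsilon
            candidate.clamp_(0.0, 1.0)          # keep pixel values in a valid range

            jeep_score = model(candidate).softmax(dim=1)[0, JEEP].item()
            if jeep_score > best:               # the tweak helped: keep it
                image, best = candidate, jeep_score
            if model(image).argmax(dim=1).item() == JEEP:
                break                           # the network now calls the tank a jeep
    return image

In practice, researchers such as Szegedy and Goodfellow used gradient-based methods that find such perturbations far more efficiently than random trial and error, but the keep-what-helps logic is the same.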
